Manual software testing is an important part of software development in which testers test the software by hand. This ensures that the software meets the specified requirements and works without errors. To make the work of our testers easier, we have developed a web-based tool to support manual testing. The tool simplifies the creation of test runs and the permanent storage of test results.
In the past, we tried various tools to support manual testing, but none of them proved satisfactory. They were often too large and had too many features, which made them overly complicated, or they simply did not work properly. That is why we decided to develop a tool that meets our exact requirements and provides precisely the functionality we need.
Our software tests are based on specifications written in the style of behavior-driven development (BDD), an agile methodology built on collaboration between developers, testers, and non-technical stakeholders. Because the desired behavior of the software is described clearly in natural language, it can be understood by everyone involved. We use the simple description language Gherkin, which makes it possible to describe different scenarios with very few rules and structured phrases. The focus is on describing, in the simplest and least formal way possible, the scenarios that capture the expected behavior of a software feature as concrete examples.
Figure 1: An example of a Gherkin scenario
Several scenarios form a feature, which is then recorded in a feature file. These feature files are used by the testers to test the software.
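To make this concrete, the following is a minimal sketch of how a feature with its scenarios might be represented in such a tool. The Gherkin text and all names in the sketch are invented for illustration; the post does not describe the tool's actual implementation.

    from dataclasses import dataclass, field

    # Hypothetical, minimal in-memory model of a feature file and its scenarios.
    @dataclass
    class Scenario:
        name: str
        steps: list[str] = field(default_factory=list)

    @dataclass
    class Feature:
        name: str
        scenarios: list[Scenario] = field(default_factory=list)

    # An invented example feature in Gherkin style (not taken from a real feature file).
    FEATURE_TEXT = """\
    Feature: Login
      Scenario: Successful login with valid credentials
        Given a registered user
        When the user enters a valid user name and password
        Then the start page is displayed
    """

    def parse_feature(text: str) -> Feature:
        """Very simplified parser: only Feature, Scenario, and step lines are handled."""
        feature = Feature(name="")
        for line in text.splitlines():
            stripped = line.strip()
            if stripped.startswith("Feature:"):
                feature.name = stripped.removeprefix("Feature:").strip()
            elif stripped.startswith("Scenario:"):
                feature.scenarios.append(Scenario(name=stripped.removeprefix("Scenario:").strip()))
            elif stripped and feature.scenarios:
                feature.scenarios[-1].steps.append(stripped)
        return feature

    if __name__ == "__main__":
        feature = parse_feature(FEATURE_TEXT)
        print(feature.name, [s.name for s in feature.scenarios])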
Before a test run can take place, the testers must be given access to the feature files. They then select the relevant scenarios and arrange them in a meaningful order for testing; the test can then be started and the results documented. This is where the web-based tool comes in: all feature files are loaded into the tool automatically and are immediately available, and they are listed in a clear overview.
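As a rough illustration of the automatic loading, the tool could simply scan a directory for feature files. The directory name and function below are assumptions for this sketch, not the tool's actual behavior.

    from pathlib import Path

    # Hypothetical location of the feature files; the real source is not described in this post.
    FEATURE_DIR = Path("features")

    def discover_feature_files(directory: Path = FEATURE_DIR) -> list[Path]:
        """Collect all .feature files below the given directory, sorted for a stable listing."""
        return sorted(directory.glob("**/*.feature"))

    if __name__ == "__main__":
        for path in discover_feature_files():
            print(path)  # each file would then be parsed and shown in the tool's overview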
Figure 2: Test run
Figure 2 shows how a test run is performed. During a test run, the feature files are worked through one at a time, and a result is recorded for each of them: Passed, Failed, or Skipped. When a test run is finished, the data is saved automatically so that the results remain available afterwards. Automatically loading the feature files and automatically saving the test run are intended to simplify and speed up the process.
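A minimal sketch of how a test run with the three result values might be recorded and saved automatically is shown below; the file name, field names, and JSON format are assumptions made for this example.

    import json
    from dataclasses import dataclass
    from enum import Enum

    class Result(str, Enum):
        PASSED = "Passed"
        FAILED = "Failed"
        SKIPPED = "Skipped"

    @dataclass
    class FeatureResult:
        feature: str
        result: Result

    @dataclass
    class TestRun:
        name: str
        results: list[FeatureResult]

        def save(self, path: str = "test_run.json") -> None:
            """Persist the run so the results remain available after it is finished."""
            data = {"name": self.name,
                    "results": [{"feature": r.feature, "result": r.result.value} for r in self.results]}
            with open(path, "w", encoding="utf-8") as f:
                json.dump(data, f, indent=2)

    if __name__ == "__main__":
        run = TestRun(name="Release 1.2 smoke test",
                      results=[FeatureResult("login.feature", Result.PASSED),
                               FeatureResult("checkout.feature", Result.FAILED)])
        run.save()  # in the tool this happens automatically when the run is completed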
Since not all scenarios in the feature files can be tested manually, and since it usually makes sense to test scenarios in a certain order, a test plan can be created. Filters can be used to select the feature files and scenarios that are to be tested manually, and a sort order defines the sequence in which the scenarios are run during testing. Descriptive texts can also be added to the test plan and to individual scenarios, for example to give testers detailed instructions. This makes it possible to always start certain software tests from the same test plan, so that a ready-made configuration is available for identical tests.
Figure 3: Creating a test plan
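The following is a sketch of what a test plan with filters, a sort order, and descriptive texts might look like as data; all field names are invented for illustration and are not taken from the tool.

    from dataclasses import dataclass, field

    @dataclass
    class PlannedScenario:
        feature: str
        scenario: str
        order: int                      # position in which the scenario should be tested
        instructions: str = ""          # optional detailed instructions for the tester

    @dataclass
    class TestPlan:
        name: str
        description: str = ""
        tag_filter: set[str] = field(default_factory=set)   # e.g. only scenarios tagged @manual
        scenarios: list[PlannedScenario] = field(default_factory=list)

        def ordered_scenarios(self) -> list[PlannedScenario]:
            """Return the scenarios in the configured sort order."""
            return sorted(self.scenarios, key=lambda s: s.order)

    if __name__ == "__main__":
        plan = TestPlan(name="Regression before release",
                        description="Run on every release candidate.",
                        tag_filter={"manual"},
                        scenarios=[PlannedScenario("checkout.feature", "Pay by invoice", order=2),
                                   PlannedScenario("login.feature", "Successful login", order=1,
                                                   instructions="Use the staging account.")])
        for s in plan.ordered_scenarios():
            print(s.order, s.feature, "-", s.scenario)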
Software tests are typically run on a variety of devices, and which devices were used can be recorded during the test run for accurate documentation. Devices can be added, removed, and edited using the Device Manager.
Figure 4: Device management
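A small sketch of a device list with add, edit, and remove operations might look like the following; the fields and method names are assumptions, and a real tool would persist the devices rather than keep them in memory.

    from dataclasses import dataclass

    @dataclass
    class Device:
        device_id: str
        name: str
        operating_system: str = ""

    class DeviceManager:
        """Minimal in-memory device registry used only for this sketch."""
        def __init__(self) -> None:
            self._devices: dict[str, Device] = {}

        def add(self, device: Device) -> None:
            self._devices[device.device_id] = device

        def edit(self, device_id: str, **changes: str) -> None:
            for key, value in changes.items():
                setattr(self._devices[device_id], key, value)

        def remove(self, device_id: str) -> None:
            self._devices.pop(device_id, None)

    if __name__ == "__main__":
        manager = DeviceManager()
        manager.add(Device("tablet-01", "Test tablet", "Android 14"))
        manager.edit("tablet-01", name="Test tablet (QA lab)")
        manager.remove("tablet-01")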
Overall, the feedback from the testers on the current feature set has been very positive and provides valuable input for future development. We are committed to continuing to improve the tool, for example by adding the ability to save results at the scenario level and by integrating direct bug reporting via an issue tracker.